
    The QUIC Fix for Optimal Video Streaming

    Within a few years of its introduction, QUIC has gained traction: a significant share of Internet traffic is now delivered over QUIC. The networking community is actively debating the fairness, performance, and applicability of QUIC for various use cases, but these debates center on a narrow, common theme: how does the new reliable transport built on top of UDP fare in different scenarios? Support for unreliable delivery in QUIC remains largely unexplored. The option of delivering content unreliably, as in a best-effort model, deserves the attention of the QUIC designers and the community. We propose extending QUIC to support unreliable streams and present a simple approach for implementation. We discuss a simple use case, video streaming, an application that dominates overall Internet traffic, which can leverage the unreliable streams and potentially bring immense benefits to network operators and content providers. To this end, we present a prototype implementation that, by using both the reliable and unreliable streams in QUIC, outperforms both TCP and QUIC in our evaluations. Comment: Published at the ACM CoNEXT Workshop on the Evolution, Performance, and Interoperability of QUIC (EPIQ).
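
    The prototype multiplexes video data over both reliable and unreliable QUIC streams; the abstract does not spell out its scheduling policy, so the snippet below is only a minimal sketch of one plausible rule, with hypothetical send_reliable/send_unreliable transport hooks: keyframes, whose loss stalls decoding, go over a reliable stream, while delta frames are sent best-effort.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VideoFrame:
    frame_id: int
    is_keyframe: bool   # I-frames must arrive; P/B-frames are expendable
    payload: bytes

def schedule_frame(frame: VideoFrame,
                   send_reliable: Callable[[bytes], None],
                   send_unreliable: Callable[[bytes], None]) -> str:
    """Choose a delivery mode per frame (the transport hooks are hypothetical)."""
    if frame.is_keyframe:
        send_reliable(frame.payload)       # retransmitted on loss
        return "reliable"
    send_unreliable(frame.payload)         # best-effort, never retransmitted
    return "unreliable"

if __name__ == "__main__":
    sent = []
    frames = [VideoFrame(0, True, b"I"), VideoFrame(1, False, b"P"),
              VideoFrame(2, False, b"B"), VideoFrame(3, True, b"I")]
    for f in frames:
        print(f"frame {f.frame_id}: {schedule_frame(f, sent.append, sent.append)}")
```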

    On the Benefit of Virtualization: Strategies for Flexible Server Allocation

    Virtualization technology facilitates a dynamic, demand-driven allocation and migration of servers. This paper studies how the flexibility offered by network virtualization can be used to improve Quality-of-Service parameters such as latency while taking allocation costs into account. We consider a generic use case where both the overall demand issued for a certain service (for example, an SAP application in the cloud, or a gaming application) and the origins of the requests change over time (e.g., due to time zone effects or user mobility), and we present online and optimal offline strategies to compute the number and location of the servers implementing this service. These algorithms also allow us to study the fundamental benefits of dynamic resource allocation compared to static systems. Our simulation results confirm our expectation that the gain of flexible server allocation is particularly high in scenarios with moderate dynamics.
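
    The online strategies themselves are not described in this abstract, so the following is only a sketch of one simple placement rule under assumed costs (the latency matrix, migration cost, and single-server setting are all illustrative, not the paper's algorithm): keep serving from the current site and migrate to the current demand hot spot once the accumulated latency penalty exceeds the migration cost, in the spirit of rent-or-buy arguments.

```python
MIGRATION_COST = 10.0   # assumed cost of moving the server (arbitrary units)

def serve_requests(request_batches, latency, start_site="eu"):
    """Online single-server placement: migrate to the currently best site once
    the latency penalty accumulated at the current site exceeds the migration
    cost (a rent-or-buy style rule, not the paper's exact algorithm)."""
    site, penalty, placements = start_site, 0.0, []
    for batch in request_batches:                  # one batch of request origins per time slot
        best = min(latency, key=lambda s: sum(latency[s][o] for o in batch))
        cur_cost = sum(latency[site][o] for o in batch)
        best_cost = sum(latency[best][o] for o in batch)
        penalty += cur_cost - best_cost            # extra latency paid by staying put
        if penalty > MIGRATION_COST:
            site, penalty = best, 0.0              # migration has amortized: move
        placements.append(site)
    return placements

latency = {"eu": {"eu": 1, "us": 5}, "us": {"eu": 5, "us": 1}}   # site -> origin -> latency
slots = [["eu"] * 8, ["eu"] * 6 + ["us"] * 2, ["us"] * 8, ["us"] * 8]
print(serve_requests(slots, latency))              # demand shifts from eu to us over time
```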

    Revisiting Content Availability in Distributed Online Social Networks

    Online Social Networks (OSNs) are among the most popular applications in today's Internet. Decentralized online social networks (DOSNs), a special class of OSNs, promise better privacy and autonomy than traditional centralized OSNs. However, ensuring availability of content when the content owner is not online remains a major challenge. In this paper, we rely on the structure of the social graphs underlying DOSNs for replication. In particular, we propose that friends, who are anyhow interested in the content, be used to replicate the user's content. We study the availability of such natural replication schemes via both theoretical analysis and simulations based on data from OSN users. We find that the availability of the content increases drastically compared to the online time of the user alone, e.g., by a factor of more than 2 for 90% of the users. Thus, with these simple schemes, we provide a baseline for any more complicated content replication scheme. Comment: 11 pages, 12 figures; technical report at TU Berlin, Department of Electrical Engineering and Computer Science (ISSN 1436-9915).
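
    The core availability metric can be illustrated with a toy computation (the diurnal online schedules below are made up): content is available whenever at least one replica holder, the owner or a friend storing a copy, is online, so the covered fraction of the day grows with the union of their online periods.

```python
def availability(online_hours_per_replica, horizon=24):
    """Fraction of the horizon during which at least one replica holder is online.
    online_hours_per_replica: per node, the set of hours (0..horizon-1) it is online."""
    covered = set().union(*online_hours_per_replica)
    return len(covered) / horizon

# Hypothetical diurnal patterns: the owner alone vs. the owner plus two friends.
owner   = set(range(18, 23))        # online 5 hours per day
friend1 = set(range(8, 12))
friend2 = set(range(11, 17))

print("owner only      :", availability([owner]))                    # ~0.21
print("owner + friends :", availability([owner, friend1, friend2]))  # ~0.58, more than 2x
```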

    Distributed Mega-Datasets: The Need for Novel Computing Primitives

    © 2019 IEEE. With the ongoing digitalization, an increasing number of sensors is becoming part of our digital infrastructure. These sensors produce highly, even globally, distributed data streams. The aggregate data rate of these streams far exceeds local storage and computing capabilities. Yet, for radical new services (e.g., predictive maintenance and autonomous driving), which depend on various control loops, this data needs to be analyzed in a timely fashion. In this position paper, we outline a system architecture that can effectively handle distributed mega-datasets using data aggregation. We point out two research challenges: the need for (1) novel computing primitives that allow us to aggregate data at scale across multiple hierarchies (i.e., time and location) while answering a multitude of a priori unknown queries, and (2) transfer optimizations that enable rapid local and global decision making. Funding: EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNet.
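
    As a concrete illustration of the first challenge, the toy primitive below (the cell schema and reading format are assumptions, not the paper's design) rolls raw sensor readings up along both hierarchies at once, keeping per-cell summaries (count, sum, min, max) so that many later, a priori unknown queries such as means or extremes can be answered without shipping the raw data again.

```python
from collections import defaultdict

def rollup(readings, time_bucket, location_level):
    """Aggregate raw readings into coarser (time, location) cells, keeping
    count/sum/min/max so that many later queries (means, extremes) can be
    answered without re-shipping the raw data. A toy sketch of such a primitive."""
    cells = defaultdict(lambda: {"count": 0, "sum": 0.0,
                                 "min": float("inf"), "max": float("-inf")})
    for ts, location, value in readings:
        key = (ts // time_bucket, location[:location_level])
        cell = cells[key]
        cell["count"] += 1
        cell["sum"] += value
        cell["min"] = min(cell["min"], value)
        cell["max"] = max(cell["max"], value)
    return dict(cells)

# A location is a path like ("eu", "berlin", "plant-3"); level 2 keeps ("eu", "berlin").
readings = [(0, ("eu", "berlin", "plant-3"), 21.5),
            (30, ("eu", "berlin", "plant-7"), 22.0),
            (95, ("us", "nyc", "plant-1"), 19.0)]
for key, agg in rollup(readings, time_bucket=60, location_level=2).items():
    print(key, "mean =", agg["sum"] / agg["count"])
```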

    On the importance of Internet eXchange Points for today's Internet ecosystem

    Internet eXchange Points (IXPs) are generally considered to be the successors of the four Network Access Points that were mandated as part of the decommissioning of the NSFNET in 1994/95 to facilitate the transition from the NSFNET to the "public Internet" as we know it today. While this popular view does not tell the whole story behind the early beginnings of IXPs, it is true that since around 1994 the number of operational IXPs worldwide has grown to more than 300 (as of May 2013), with the largest IXPs handling daily traffic volumes comparable to those carried by the largest Tier-1 ISPs. Yet IXPs have never really attracted much attention from the networking research community. At first glance, this lack of interest seems understandable, as IXPs apparently have little to do with current "hot" topics such as data centers and cloud services or software-defined networking (SDN) and mobile communication. However, we argue in this article that IXPs are in fact all about data centers and cloud services, and even SDN and mobile communication, and should be of great interest to networking researchers interested in understanding the current and future Internet ecosystem. To this end, we survey the existing but largely unknown sources of publicly available information about IXPs to describe their basic technical and operational aspects and to highlight the critical differences among IXPs in different regions of the world, especially in Europe and North America. More importantly, we illustrate the important role that IXPs play in today's Internet ecosystem and discuss how IXP-driven innovation in Europe is shaping and redefining the Internet marketplace, not only in Europe but increasingly around the world. Comment: 10 pages; keywords: Internet Exchange Point, Internet Architecture, Peering, Content Delivery.

    Watching the IPv6 Takeoff from an IXP's Viewpoint

    Varying levels of interest among network operators in deploying the new Internet address space have kept IPv6 deployment slow. However, since the last block of IPv4 addresses was assigned, the Internet community has taken the concern about address space scarcity seriously and started to move forward actively. After the successful IPv6 test on 8 June 2011 (World IPv6 Day [1]), network operators and service/content providers were brought together to prepare the next step of global IPv6 deployment (World IPv6 Launch on 6 June 2012 [2]). The main purpose of the latter event was to permanently enable IPv6 connectivity. In this paper, based on Internet traffic collected at a large European Internet Exchange Point (IXP), we present the status of IPv6 traffic, focusing mainly on the periods around the two global IPv6 events. Our results show that IPv6 traffic is responsible for only a small fraction of the total traffic, about 0.5% at peak. Nevertheless, we are positively impressed that IPv6 traffic and prefixes are increasing steeply and that the application mix of IPv6 traffic is starting to resemble that of the IPv4-dominated Internet.
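
    The headline number, IPv6's share of total traffic, is straightforward to reproduce from flow-level data; the snippet below shows the computation on a made-up record format of (day, IP version, byte count), which is a stand-in for whatever the IXP's flow export actually provides.

```python
from collections import defaultdict

def ipv6_share(flows):
    """Per-day share of bytes carried over IPv6.
    flows: iterable of (day, ip_version, byte_count) records (assumed format)."""
    totals = defaultdict(lambda: {4: 0, 6: 0})
    for day, version, nbytes in flows:
        totals[day][version] += nbytes
    return {day: t[6] / (t[4] + t[6]) for day, t in sorted(totals.items())}

flows = [("2012-06-05", 4, 995_000), ("2012-06-05", 6, 5_000),
         ("2012-06-06", 4, 992_000), ("2012-06-06", 6, 8_000)]
for day, share in ipv6_share(flows).items():
    print(day, f"{share:.2%}")
```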

    Online Replication Strategies for Distributed Data Stores

    The rate at which data is produced at the network edge, e.g., collected from sensors and Internet of Things (IoT) devices, will soon exceed the storage and processing capabilities of a single system and the capacity of the network. Thus, data will need to be collected and preprocessed in distributed data stores, as part of a distributed database, at the network edge. Yet, even in this setup, the transfer of query results will incur prohibitive costs. To further reduce data transfers, patterns in the workloads must be exploited. Particularly in IoT scenarios, we expect data access to be highly skewed: most data will be store-only, while a fraction will be popular. Here, the replication of popular raw data, as opposed to the shipment of partially redundant query results, can reduce the volume of data transferred over the network. In this paper, we design online strategies to decide between replicating data from data stores or forwarding the queries and retrieving their results. Our insight is that by profiling access patterns of the data, we can lower the data transfer cost and the corresponding response times. We evaluate the benefit of our strategies using two real-world datasets.
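
    The abstract does not give the decision rule, so the class below is only a sketch of one natural online policy under assumed per-query and replication costs: keep forwarding queries for an object until the accumulated forwarding cost matches the cost of replicating it, then pull the raw object, a classic rent-or-buy trade-off.

```python
class ReplicationDecider:
    """Per-object online choice between forwarding queries to a remote store
    and replicating the raw object locally: replicate once the forwarding
    cost spent on an object reaches its replication cost (illustrative rule,
    not the paper's exact strategy)."""

    def __init__(self, forward_cost=1.0, replication_cost=8.0):
        self.forward_cost = forward_cost           # assumed cost per forwarded query
        self.replication_cost = replication_cost   # assumed cost to pull the raw object
        self.spent = {}                            # forwarding cost paid so far per object
        self.replicated = set()

    def access(self, obj_id):
        if obj_id in self.replicated:
            return "local"                         # already replicated: answer locally
        self.spent[obj_id] = self.spent.get(obj_id, 0.0) + self.forward_cost
        if self.spent[obj_id] >= self.replication_cost:
            self.replicated.add(obj_id)            # popular enough: pull the raw data
            return "replicate"
        return "forward"

decider = ReplicationDecider()
workload = ["sensor-42"] * 10 + ["sensor-7"]       # skewed access: one hot, one cold object
print([decider.access(obj) for obj in workload])
```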

    Consistent SDNs through Network State Fuzzing

    Conventional wisdom holds that a software-defined network (SDN) operates under the premise that the logically centralized control plane has an accurate representation of the actual data plane state. Unfortunately, bugs, misconfigurations, faults, or attacks can introduce inconsistencies that undermine correct operation. Previous work in this area, however, lacks a holistic methodology to tackle this problem and thus addresses only parts of it. Yet the consistency of the overall system is only as good as its least consistent part. Motivated by an analogy between network consistency checking and program testing, we propose to add active probe-based network state fuzzing to our consistency check repertoire. Our system, PAZZ, combines production traffic with active probes to periodically test whether the actual forwarding paths and decision elements (on the data plane) correspond to the expected ones (on the control plane). Our insight is that active traffic covers inconsistency cases beyond those identified by passive traffic. We built a PAZZ prototype and evaluated it on topologies of varying scale and complexity. Our results show that PAZZ requires minimal network resources to detect persistent data plane faults through fuzzing and to localize them quickly, while outperforming baseline approaches. Comment: Added three relevant references; the arXiv version was later accepted to IEEE Transactions on Network and Service Management (TNSM), 2019, under the title "Towards Consistent SDNs: A Case for Network State Fuzzing".
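
    At its core, the check compares control-plane expectations against data-plane observations per flow; the sketch below captures only that comparison step on a made-up path representation (lists of switch IDs keyed by a flow string) and omits PAZZ's actual probe generation and fuzzing machinery.

```python
def check_consistency(expected_paths, observed_paths):
    """Compare control-plane expectations against data-plane observations.
    Both maps go from a flow key (here a simple string) to the list of switch
    IDs the flow should/does traverse. Returns flows whose actual path diverges
    and flows never observed in production traffic, which become candidates
    for active probing."""
    mismatched, unexercised = {}, []
    for flow, expected in expected_paths.items():
        observed = observed_paths.get(flow)
        if observed is None:
            unexercised.append(flow)              # no production traffic covers it: probe
        elif observed != expected:
            mismatched[flow] = (expected, observed)
    return mismatched, unexercised

expected = {"10.0.0.1->10.0.1.1": ["s1", "s2", "s4"],
            "10.0.0.1->10.0.2.1": ["s1", "s3", "s5"]}
observed = {"10.0.0.1->10.0.1.1": ["s1", "s3", "s4"]}   # detour: possibly a stale rule on s1
print(check_consistency(expected, observed))
```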

    Stellar: Network Attack Mitigation using Advanced Blackholing

    © ACM 2018. The definitive Version of Record was published in Proceedings of the 14th International Conference on Emerging Networking EXperiments and Technologies (CoNEXT '18), http://dx.doi.org/10.1145/3281411.3281413. Network attacks, including Distributed Denial-of-Service (DDoS) attacks, continuously increase in bandwidth and damage (recent attacks exceed 1.7 Tbps) and have a devastating impact on the targeted companies and governments. Over the years, mitigation techniques, ranging from blackholing to policy-based filtering at routers and on to traffic scrubbing, have been added to the network operator's toolbox. Even though these mitigation techniques provide some protection, they either yield severe collateral damage, e.g., dropping legitimate traffic (blackholing), are cost-intensive or do not scale well to Tbps-level attacks (ACL filtering, traffic scrubbing), or require cooperation and sharing of resources (Flowspec). In this paper, we propose Advanced Blackholing and its system realization, Stellar. Advanced blackholing builds upon the scalability of blackholing while limiting collateral damage by increasing its granularity. Moreover, Stellar reduces the required level of cooperation to enhance mitigation effectiveness. We show that fine-grained blackholing can be realized, e.g., at a major IXP, by combining available hardware filters with novel signaling mechanisms. We evaluate the scalability and performance of Stellar at a large IXP that interconnects more than 800 networks, exchanges more than 6 Tbps of traffic, and witnesses many network attacks every day. Our results show that network attacks, e.g., DDoS amplification attacks, can be successfully mitigated while the networks and services under attack continue to operate untroubled. Funding: EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNet; DFG, FE 570/4-1; Gottfried Wilhelm Leibniz-Preis 201
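
    The gain in granularity can be pictured with a toy rule generator (the rule format and attack-vector input are assumptions, not Stellar's actual signaling or the IXP's hardware filter syntax): rather than a /32 blackhole that drops all traffic to the victim, one narrow drop rule is emitted per observed attack vector, so legitimate traffic keeps flowing.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class FilterRule:
    dst_prefix: str            # victim address/prefix
    protocol: Optional[str]    # e.g. "udp", or None for any protocol
    src_port: Optional[int]    # e.g. 11211 for memcached amplification
    action: str = "drop"

def fine_grained_blackhole(victim: str,
                           attack_vectors: List[Tuple[str, int]]) -> List[FilterRule]:
    """Emit one narrow drop rule per attack vector instead of a classic /32
    blackhole that also discards all legitimate traffic to the victim."""
    return [FilterRule(dst_prefix=f"{victim}/32", protocol=proto, src_port=port)
            for proto, port in attack_vectors]

# Reflection/amplification vectors observed for this victim (illustrative input).
for rule in fine_grained_blackhole("203.0.113.10", [("udp", 11211), ("udp", 123)]):
    print(rule)
```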

    Improving Network Troubleshooting using Virtualization

    Diagnosing problems, deploying new services, testing protocol interactions, and validating network configurations are hard problems in today's Internet. This paper proposes to leverage the concept of network virtualization to overcome such problems: (1) Monitoring VNets can be created on demand alongside any production network to enable network-wide monitoring in a robust and cost-efficient manner; (2) Shadow VNets enable troubleshooting as well as safe upgrades to both software components and their configurations. Both approaches build on the agility and isolation properties of the underlying virtualized infrastructure. Neither requires changes to the physical or logical structure of the production network. Thus, they have the potential to substantially ease network operation and improve resilience against mistakes.
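
    The Shadow VNet idea can be pictured with a small sketch (the configuration schema below is invented for illustration and is not a real orchestration API): the production network description is cloned into a shadow instance, the candidate change is applied only there, and mirrored traffic would exercise both copies before the change is promoted.

```python
import copy

def create_shadow_vnet(production: dict) -> dict:
    """Clone a production VNet description so a candidate change can be tried
    in isolation while mirrored traffic exercises both copies (toy model only)."""
    shadow = copy.deepcopy(production)
    shadow["name"] = production["name"] + "-shadow"
    shadow["receives_mirrored_traffic"] = True
    return shadow

production = {"name": "prod-net",
              "routers": {"r1": {"ospf_cost": 10}, "r2": {"ospf_cost": 10}}}
shadow = create_shadow_vnet(production)
shadow["routers"]["r1"]["ospf_cost"] = 50     # candidate change, applied only to the shadow

# The production configuration stays untouched until the shadow behaves as expected.
print(production["routers"]["r1"], shadow["routers"]["r1"])
```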